On Message Packaging in Task Scheduling for Distributed Memory Parallel Machines
Authors
Abstract
In this paper, we report a performance gap between a schedule with small makespan on the task scheduling model and the corresponding parallel program on distributed memory parallel machines. The main reason for the gap is the software overhead of interprocessor communication. Consequently, speedup ratios of schedules on the model do not approximate well those of the parallel programs on the machines. The purpose of this paper is to obtain a task scheduling algorithm that generates a schedule that both approximates the corresponding parallel program well and has small makespan. For this purpose, we propose algorithm BCSH, which generates only bulk synchronous schedules. In such schedules, communication-free phases and communication phases alternate, and all interprocessor communication takes place in the latter phases; the corresponding parallel programs can therefore apply the message packaging technique easily. Message packaging reduces the many per-message software overheads incurred when a source processor sends several messages to the same destination processor to roughly a single overhead, and it improves the performance of a parallel program significantly. Finally, we show experimental results on the performance gaps for BCSH, Kruatrachue's algorithm DSH, and Ahmad et al.'s algorithm ECPFD. The schedules produced by DSH and ECPFD are known for their small makespans, but message packaging cannot be applied effectively to the corresponding programs. The results show that a bulk synchronous schedule with small makespan has the advantages that the gap is small and the corresponding program is a high-performance parallel program.
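To make the effect of message packaging concrete, the following is a minimal cost-model sketch in Python (not from the paper); the constants PER_MESSAGE_OVERHEAD and PER_BYTE_COST, the linear cost model, and the function names are illustrative assumptions. It shows how packaging all messages between the same source-destination pair within one communication phase of a bulk synchronous schedule replaces many per-message software overheads with roughly one.

# Illustrative sketch (not from the paper): per-phase communication cost model
# showing why packaging messages to the same destination reduces overhead.
# The constants and function names are assumptions made for this example.

from collections import defaultdict

PER_MESSAGE_OVERHEAD = 50.0   # assumed software overhead per message (e.g., microseconds)
PER_BYTE_COST = 0.01          # assumed transfer cost per byte (e.g., microseconds)

def phase_cost_unpacked(messages):
    """Cost when every message pays its own software overhead.

    `messages` is a list of (src, dst, nbytes) tuples sent in one
    communication phase of a bulk synchronous schedule.
    """
    return sum(PER_MESSAGE_OVERHEAD + PER_BYTE_COST * nbytes
               for _src, _dst, nbytes in messages)

def phase_cost_packed(messages):
    """Cost when all messages with the same (src, dst) pair are packaged
    into one message, so the software overhead is paid once per pair."""
    packed = defaultdict(int)
    for src, dst, nbytes in messages:
        packed[(src, dst)] += nbytes
    return sum(PER_MESSAGE_OVERHEAD + PER_BYTE_COST * nbytes
               for nbytes in packed.values())

if __name__ == "__main__":
    # Ten small messages from processor 0 to processor 1 in one phase.
    phase = [(0, 1, 64)] * 10
    print("unpacked:", phase_cost_unpacked(phase))  # pays 10 software overheads
    print("packed:  ", phase_cost_packed(phase))    # pays roughly 1 overhead

Under this simple linear model, ten 64-byte messages from one processor to another cost ten software overheads when sent separately but only about one overhead when packaged, which is the effect that communication phases of a bulk synchronous schedule are positioned to exploit.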
Similar Articles
On the Performance Gap between a Task Schedule and its Corresponding Parallel Program
Consider a task scheduling problem for a given fine-grained task graph on distributed-memory parallel machines. In this paper, we report that schedules with small makespan generated by existing task scheduling algorithms do not usually become fast parallel programs on distributed-memory parallel machines. This is caused by the lack of consideration of message packaging. To obtain fast parallel programs u...
A Task Scheduling Algorithm to Package Messages on Distributed Memory Parallel Machines
In this paper, we report a performance gap between a schedule with good makespan on the task scheduling model and the corresponding parallel program on distributed memory parallel machines. The main reason is the software overhead of interprocessor communication. Consequently, speedup ratios of schedules on the model do not approximate well those of parallel programs on the machines. The pu...
Scheduling Parallel Tasks with Intra-communication Overhead in a Grid Computing Environment
With the improvements in wide-area network performance and powerful computers, it is possible to integrate a large number of distributed machines belonging to different organizations as a single system, for example, a grid computing environment. A grid computing environment involves cooperation and sharing resources among distributed machines. Users may dispatch their tasks to remote computing ...
A Message-Passing Distributed Memory Parallel Algorithm for a Dual-Code Thin Layer, Parabolized Navier-Stokes Solver
In this study, the results of parallelization of a 3-D dual code (Thin Layer, Parabolized Navier-Stokes solver) for solving supersonic turbulent flow around body and wing-body combinations are presented. As a serial code, the TLNS solver is very time consuming and requires a large amount of memory due to the iterative and lengthy computations. Also, for complicated geometries, an exceeding number of grid...
Green Energy-aware Task Scheduling Using the DVFS Technique in Cloud Computing
Nowadays, energy consumption has become a critical issue in high-performance distributed computing systems, so green computing tries to reduce energy consumption, carbon footprint, and CO2 emissions in high-performance computing systems (HPCs) such as clusters, Grids, and Clouds that run a large number of parallel tasks. Reducing energy consumption for high-end computing can bring various benefits such as red...
Journal: Int. J. Found. Comput. Sci.
Volume: 12
Pages: -
Publication date: 2001